21 research outputs found

    Development of a tabletop guidance system for educational robots

    The guidance of a vehicle in an outdoor setting is typically implemented using a Real Time Kinematic Global Positioning System (RTK-GPS), potentially enhanced by auxiliary sensors such as electronic compasses, rotation encoders, gyroscopes, and vision systems. Since GPS does not function in an indoor setting, where educational competitions are often held, an alternative guidance system was developed. This article describes a guidance method built around a laser-based localization system, which uses a single robot-borne laser transmitter spinning in a horizontal plane at an angular velocity of up to 81 radians per second. Sensor arrays positioned in the corners of a flat rectangular table with dimensions of 1.22 m × 1.83 m detected the laser beam passages. The relative time differences among the detections of the laser passages indicated the angles of the sensors with respect to the laser transmitter on the robot. These angles were translated into Cartesian coordinates. The guidance of the robot was implemented using a uni-directional wireless serial connection and position feedback from the localization system. Three experiments were conducted to test the system: 1) The accuracy of the static localization system was determined while the robot stood still. In this test the average error among valid measurements was smaller than 0.3%. However, a maximum of 3.7% of the measurements were invalid due to several causes. 2) The accuracy of the guidance system was assessed while the robot followed a straight line. The average deviation from this straight line was 3.6 mm while the robot followed a path with a length of approximately 0.9 m. 3) The overall performance of the guidance system was studied while the robot followed a complex path consisting of 33 sub-paths. The conclusion was that the system worked reasonably accurately, unless the robot came in close proximity …
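    To make the angle-to-coordinate step concrete, the sketch below estimates the transmitter's table position from the corner-detection times of one revolution. It is not the paper's actual algorithm: it assumes the sensor corresponding to each detection is known, that the robot is on the table so each swept angle stays below 180°, and it uses a brute-force grid search instead of a closed-form solution.

```python
import numpy as np

# Known corner-sensor positions on the 1.22 m x 1.83 m table (metres), in sweep order.
SENSORS = np.array([[0.0, 0.0], [1.22, 0.0], [1.22, 1.83], [0.0, 1.83]])

def subtended_angle(p, a, b):
    """Unsigned angle a-p-b seen from point p, in [0, pi]."""
    u, v = a - p, b - p
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

def locate_transmitter(detection_times, omega, grid_step=0.01):
    """Estimate (x, y) of the spinning transmitter from the times at which its
    beam hit the corner sensors, given the spin rate omega [rad/s].
    detection_times[i] is assumed to belong to SENSORS[i]."""
    t = np.asarray(detection_times, dtype=float)
    measured = omega * np.diff(t)        # angles swept between consecutive sensors
    best, best_err = None, np.inf
    for x in np.arange(0.05, 1.22, grid_step):
        for y in np.arange(0.05, 1.83, grid_step):
            p = np.array([x, y])
            predicted = np.array([subtended_angle(p, SENSORS[i], SENSORS[i + 1])
                                  for i in range(len(t) - 1)])
            err = np.sum((predicted - measured) ** 2)
            if err < best_err:
                best, best_err = p, err
    return best
```

    In practice a closed-form three-point resection with local refinement would be used; the grid search only illustrates that three swept angles suffice to pin down the position on the table.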

    Improving obstacle awareness for robotic harvesting of sweet-pepper

    No full text
    Obstacles are densely spaced in a sweet-pepper crop and they limit the free workspace for a robot that detaches the fruit from the plant. Previous harvesting robots mostly attempted to detach a fruit without using any information about obstacles, thereby reducing harvest success and damaging the fruit and plant. The hypothesis evaluated in this research is that a robot capable of distinguishing between hard and soft obstacles, and capable of employing this knowledge, improves harvest success and decreases plant damage during harvesting. In line with this hypothesis, the main objective was to develop a sweet-pepper harvesting robot capable of distinguishing between hard and soft obstacles, and of employing this knowledge. As a start, the thesis describes the crop environment of a harvesting robot, reviews all harvesting robots developed for high-value crops, and defines challenges for future development. Based on insights from this review, we explored the ability to distinguish five plant parts. A multi-spectral imaging set-up and artificial lighting were developed and pixels were classified using a decision tree classifier and a feature selection algorithm. Classification performance was found insufficient and therefore post-processing methods were employed to enhance performance and detect plant parts on a blob basis. Still, performance was found insufficient and a focussed study was conducted on stem localization. The imaging set-up and algorithm developed for stem localization were used to provide real stem locations for motion planning simulations. To address the motion planning problem, we developed a new method of selecting the grasp pose of the end-effector. The new method and the stem localization algorithm were both integrated in the harvesting robot, and we tested their contribution to performance. This research is the first to report a performance evaluation of a sweet-pepper harvesting robot tested under greenhouse conditions. The robot was able to harvest sweet-peppers in a commercial greenhouse, but at limited success rates: harvest success was 6% when the Fin Ray end-effector was mounted, and 2% when the Lip-type end-effector was mounted. After simplifying the crop, by removal of fruit clusters and occluding leaves, harvest success was 26% (Fin Ray) and 33% (Lip-type). Hence, these properties of the crop partly caused the low performance. The cycle time per fruit was commonly 94 s, i.e. a factor of 16 too long compared with an economically feasible time of 6 s. Several recommendations were made to bridge the gap in performance. Additionally, the robot’s novel functionality of stem-dependent determination of the grasp pose was evaluated to address the hypothesis. Testing the effect of enabling stem-dependent determination of the grasp pose revealed that, in a simplified crop, grasp success increased from 41% to 61% for the Lip-type end-effector, and stem damage decreased from 19% to 13% for the Fin Ray end-effector. Although these effects seem large, they were not statistically significant and therefore resulted in rejection of the hypothesis. To re-evaluate the significance of the effects, more samples should be tested in future work. In conclusion, this PhD research improves the obstacle awareness for robotic harvesting of sweet-pepper through the robot’s capability of perceiving and employing hard obstacles (plant stems), whereas previous harvesting robots either lumped all obstacles into one obstacle class or did not perceive obstacles. This capability may serve as useful generic functionality for future robots.
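    The significance question at the end of the abstract can be illustrated with a quick calculation. The counts below are invented (the abstract does not state how many grasps were attempted) and the thesis may have used a different test; the point is only that a 41% vs 61% difference is easily non-significant at small sample sizes.

```python
from scipy.stats import fisher_exact

# Hypothetical counts: assume ~30 grasp attempts per condition (not reported here),
# matching roughly 41% vs 61% grasp success for the Lip-type end-effector.
success_off, n_off = 12, 30   # stem-dependent grasp pose disabled
success_on,  n_on  = 18, 30   # stem-dependent grasp pose enabled

table = [[success_on,  n_on  - success_on],
         [success_off, n_off - success_off]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
# With samples this small, a 20-percentage-point difference typically gives
# p > 0.05, which is why testing more samples is recommended.
```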

    Pixel classification and post-processing of plant parts using multi-spectral images of sweet-pepper

    No full text
    As part of the development of a sweet-pepper harvesting robot, obstacles need to be detected. Objectives were to classify sweet-pepper vegetation into five plant parts: stem, top of a leaf (TL), bottom of a leaf (BL), fruit and petiole (Pet); and to improve classification results by post-processing. A multi-spectral imaging set-up with artificial lighting was developed to acquire images of sweet-pepper plants. The background was segmented from the vegetation and the vegetation was classified into five plant parts through a sequence of four two-class classification problems. True-positive/scaled false-positive detection rates achieved on a pixel basis, before post-processing, were 40.0/179% for stem, 78.7/59.2% for TL, 68.5/54.8% for BL, 54.5/17.2% for fruit and 49.5/176.0% for Pet. The opening operations applied were unable to reduce false stem detections to an acceptable rate. Also, many false detections of TL (>10%), BL (14%) and Pet (>15%) remained after post-processing, but these false detections are not critical for the application because these three plant parts are soft obstacles. Furthermore, results indicate that TL and BL can be distinguished. Green fruits were post-processed using a sequence of fill-up, opening and area-based segmentation. Several area-based thresholds were tested and the most effective threshold resulted in a true-positive detection rate, on a blob basis, of 56.7% and a scaled false-positive detection rate of 6.7% for green fruits (N=60). Such fruit detection rates are a reasonable starting point for detecting obstacles for sweet-pepper harvesting. However, additional work is required to extend the obstacle map into a complete representation of the environment.
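    The fill-up/opening/area-based post-processing of the green-fruit mask can be sketched as follows. The structuring-element size and area threshold are placeholders, not the values tuned in the paper, and the input is simply the binary per-pixel fruit mask produced by the classifier.

```python
import numpy as np
from scipy import ndimage as ndi

def postprocess_fruit_mask(fruit_mask, area_threshold=500, opening_size=3):
    """Illustrative blob-level post-processing of a binary per-pixel fruit mask:
    fill holes, apply a morphological opening, then keep only blobs whose area
    exceeds a threshold. Parameter values are assumptions, not the paper's."""
    filled = ndi.binary_fill_holes(fruit_mask)
    opened = ndi.binary_opening(filled, structure=np.ones((opening_size, opening_size)))
    labels, n_blobs = ndi.label(opened)
    areas = ndi.sum(opened, labels, index=np.arange(1, n_blobs + 1))
    kept_labels = np.flatnonzero(areas >= area_threshold) + 1
    return np.isin(labels, kept_labels), len(kept_labels)
```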

    Stem localization of sweet-pepper plants using the support wire as a visual cue

    No full text
    A robot arm should avoid collisions with the plant stem when it approaches a candidate sweet-pepper for harvesting. This study therefore aims at stem localization, a topic so far only studied under controlled lighting conditions. Objectives were to develop an algorithm capable of stem localization, using detection of the support wire that is twisted around the stem; to quantitatively evaluate the performance of wire detection and stem localization under varying lighting conditions; and to determine the depth accuracy of stereo-vision under lab and greenhouse conditions. A single colour camera was mounted on a pneumatic slide to record image pairs with a small baseline of 1 cm. Artificial lighting was developed to mitigate disturbances caused by natural lighting conditions. An algorithm consisting of five steps was developed; it includes novel components such as adaptive thresholding, the use of support wires as a visual cue, the use of object-based and 3D features, and the use of a minimum expected stem distance. Wire detection rates (true-positive/scaled false-positive) were more favourable under moderate irradiance (94/5%) than under strong irradiance (74/26%). Stem localization error was measured in the horizontal plane as a Euclidean distance. The error was smaller for interpolated segments (0.8 cm), where a support wire was detected, than for extrapolated segments (1.5 cm), where a support wire was not detected. The error increased under strong irradiance. The accuracy of the stereo-vision system (±0.4 cm) met the requirements (±1 cm) in the lab, but not in the greenhouse (±4.5 cm), due to plant movement during recording. The algorithm is probably capable of constructing a useful collision map for robotic harvesting, provided the issue of inaccurate stereo-vision can be resolved along the directions proposed for future work. This is the first study of stem localization under varying lighting conditions, and it can be useful for future applications in crops that grow along a support wire.
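    The depth accuracy figures can be understood from the standard small-baseline stereo relation Z = f·B/d. The sketch below, with an assumed focal length in pixels, shows why a 1 cm baseline keeps sub-pixel matching errors at the millimetre level but turns a few pixels of apparent shift (e.g. from plant movement between the two sequential exposures) into several centimetres of depth error.

```python
def stereo_depth(disparity_px, focal_px, baseline_m=0.01):
    """Depth from horizontal disparity for a rectified pair: Z = f * B / d.
    The 0.01 m baseline matches the slide; focal_px is camera-specific."""
    return focal_px * baseline_m / disparity_px

def depth_error(depth_m, focal_px, baseline_m=0.01, disparity_error_px=0.5):
    """First-order depth sensitivity to a disparity error: dZ ~= Z**2 / (f*B) * dd."""
    return depth_m ** 2 / (focal_px * baseline_m) * disparity_error_px

# Assumed example: f = 2000 px, stem at 0.5 m -> disparity = 40 px.
print(depth_error(0.5, 2000.0, disparity_error_px=0.5))  # ~0.006 m: sub-pixel matching error
print(depth_error(0.5, 2000.0, disparity_error_px=3.0))  # ~0.038 m: a few px of plant motion
```

    These orders of magnitude are consistent with the ±0.4 cm lab and ±4.5 cm greenhouse figures reported above.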

    Robust pixel-based classification of obstacles for robotic harvesting of sweet-pepper

    No full text
    Sweet-pepper plant parts should be distinguished to construct an obstacle map for planning collision-free motion of a harvesting manipulator. Objectives were to segment vegetation from the background; to segment non-vegetation objects; to construct a classifier robust to variation among scenes; and to classify vegetation primarily into soft obstacles (top of a leaf, bottom of a leaf and petiole) and hard obstacles (stem and fruit) and secondarily into five plant parts: stem, top of a leaf, bottom of a leaf, fruit and petiole. A multi-spectral system with artificial lighting was developed to mitigate disturbances caused by natural lighting conditions. The background was successfully segmented from vegetation using a threshold in a near-infrared wavelength (>900 nm). Non-vegetation objects occurring in the scene, including drippers, pots, sticks, construction elements and support wires, were removed using a threshold in the blue wavelength (447 nm). Vegetation was classified using a Classification and Regression Trees (CART) classifier trained with 46 pixel-based features. The Normalized Difference Index features were the strongest, as selected by a Sequential Floating Forward Selection algorithm. A new robust-and-balanced accuracy performance measure, PRob, was introduced for CART pruning and feature selection. Use of PRob rendered the classifier more robust to variation among scenes, because the standard deviation among scenes was reduced by 59% for hard obstacles and 43% for soft obstacles compared with balanced accuracy. Two approaches were derived to classify vegetation: Approach A was based on hard vs. soft obstacle classification and Approach B was based on separability of classes. Approach A (PRob = 58.9) performed slightly better than Approach B (PRob = 56.1). For Approach A, the mean true-positive detection rate (standard deviation) among scenes was 59.2 (7.1)% for hard obstacles, 91.5 (4.0)% for soft obstacles, 40.0 (12.4)% for stems, 78.7 (16.0)% for top of a leaf, 68.5 (11.4)% for bottom of a leaf, 54.5 (9.9)% for fruit and 49.5 (13.6)% for petiole. These results are insufficient to construct an accurate obstacle map, and suggestions for improvements are described. Nevertheless, this is the first study that reports quantitative performance for the classification of several plant parts under varying lighting conditions.
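    A minimal sketch of the pixel-feature and classifier side is given below: a Normalized Difference Index feature, a CART-style tree from scikit-learn, and a stand-in for the robust-and-balanced score. The exact PRob definition is given in the paper; the penalised form used here (mean balanced accuracy across scenes minus its standard deviation) is only an illustrative assumption.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import balanced_accuracy_score

def ndi(band_a, band_b):
    """Normalized Difference Index between two spectral bands: (a - b) / (a + b)."""
    a, b = band_a.astype(float), band_b.astype(float)
    return (a - b) / np.maximum(a + b, 1e-9)

def robust_balanced_score(y_true, y_pred, scene_ids, penalty=1.0):
    """Illustrative stand-in for PRob: mean balanced accuracy over scenes minus a
    penalty on its standard deviation, so scene-to-scene variation is punished."""
    y_true, y_pred, scene_ids = map(np.asarray, (y_true, y_pred, scene_ids))
    scores = [balanced_accuracy_score(y_true[scene_ids == s], y_pred[scene_ids == s])
              for s in np.unique(scene_ids)]
    return np.mean(scores) - penalty * np.std(scores)

def train_hard_soft_classifier(X, y, scene_ids, max_depth=8):
    """X: (n_pixels, n_features), e.g. NDIs between band pairs; y: 1 = hard, 0 = soft."""
    clf = DecisionTreeClassifier(max_depth=max_depth, class_weight="balanced")
    clf.fit(X, y)
    print("robust-balanced score:", robust_balanced_score(y, clf.predict(X), scene_ids))
    return clf
```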
